Journal of Biomedical Informatics
Top medRxiv preprints most likely to be published in this journal, ranked by match strength.
Objective: To develop a large pretrained clinical language model from scratch using transformer architecture; systematically examine how transformer models of different sizes could help 5 clinical natural language processing (NLP) tasks at different linguistic levels. Methods: We created a large corpus with >90 billion words from clinical narratives (>82 billion words), scientific literature (6 billion words), and general English text (2.5 billion words). We developed GatorTron models from scratch ...
The natural language text in electronic health records (EHRs), such as clinical notes, often contains information that is not captured elsewhere (e.g., degree of disease progression and responsiveness to treatment) and, thus, is invaluable for downstream clinical analysis. However, to make such data available for broader research purposes, in the United States, personally identifiable information (PII) is typically removed from the EHR in accordance with the Privacy Rule of the Health Insurance ...
Objective: Social and behavioral determinants of health play a critical role in patient outcomes, yet much of this information is documented only in unstructured clinical text rather than structured records. Detecting these factors using natural language processing enables a deeper understanding of patient health and supports better decision-making. This study aims to improve the prediction of Behavioral Determinants of Health (BDoH) from medical records by systematically comparing multiple transf...
Background: Interoperable clinical decision support system (CDSS) rules provide a pathway to interoperability, a well-recognized challenge in health information technology. Building an ontology facilitates creating interoperable CDSS rules, which can be achieved by identifying the keyphrases (KP) from the existing literature. Ontology construction is traditionally a manual effort by human domain experts, and newly advanced natural language processing techniques, such as KP identification, can ...
Objective: Named Entity Recognition (NER) and Biomedical Entity Linking (BEL) are essential for transforming unstructured Electronic Health Records (EHRs) into structured information. However, tools for these tasks are limited in non-English biomedical texts such as Dutch and Italian. This study investigates the use of prompt-based learning with Large Language Models (LLMs) to perform multilingual NER and BEL using minimal domain-specific data, while addressing annotation preservation during corpus...
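The prompt-based approach this abstract describes can be pictured as assembling a few-shot prompt: in-context demonstrations followed by the target sentence. The sketch below is illustrative only; the entity type, template wording, and the Dutch example sentences are assumptions, not material from the study.

```python
def build_ner_prompt(examples, text, entity_type="Medication"):
    """Assemble a few-shot prompt for LLM-based named entity recognition.

    `examples` is a list of (sentence, entities) demonstration pairs;
    `text` is the new sentence the model should annotate.
    """
    lines = [f"Extract all {entity_type} mentions from the sentence."]
    for sentence, entities in examples:
        lines.append(f"Sentence: {sentence}")
        lines.append(f"Entities: {', '.join(entities) if entities else 'none'}")
    # The unanswered final slot is what the LLM is asked to complete.
    lines.append(f"Sentence: {text}")
    lines.append("Entities:")
    return "\n".join(lines)

# Hypothetical Dutch clinical sentences, used purely for illustration.
prompt = build_ner_prompt(
    [("Patient kreeg paracetamol 500 mg.", ["paracetamol"])],
    "Gestart met metoprolol wegens hypertensie.",
)
```

The same template generalizes across languages by swapping the demonstrations, which is one reason prompt-based methods are attractive when annotated non-English corpora are scarce.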
The aim of the Social Media Mining for Health Applications (#SMM4H) shared tasks is to take a community-driven approach to address the natural language processing and machine learning challenges inherent to utilizing social media data for health informatics. The eighth iteration of the #SMM4H shared tasks was hosted at the AMIA 2023 Annual Symposium and consisted of five tasks that represented various social media platforms (Twitter and Reddit), languages (English and Spanish), methods (binary c...
Despite growing excitement in deploying large language models (LLMs) for healthcare, most machine learning studies show success on the same few limited public data sources. It is unclear if and how most results generalize to real-world clinical settings. To measure this gap and shorten it, we analyzed protected notes from over 100 Veterans Affairs (VA) sites, focusing on extracting smoking history--a persistent and clinically impactful problem in natural language processing (NLP). Here we applie...
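Smoking-history extraction is often approached with pattern rules before (or alongside) LLMs. The toy rule set below is a generic sketch of that idea, not the VA study's actual pipeline; every pattern and label is an assumption for illustration.

```python
import re

# Illustrative rule-based smoking-status classifier. Patterns are checked
# in order, so more specific negations ("never") are tried first.
PATTERNS = [
    ("never", re.compile(r"\b(never smok|non-?smoker|denies (tobacco|smoking))", re.I)),
    ("former", re.compile(r"\b(former smoker|quit smoking|ex-?smoker)", re.I)),
    ("current", re.compile(r"\b(current(ly)? smok|active smoker)", re.I)),
]

def smoking_status(note: str) -> str:
    """Return the first matching status label, or 'unknown'."""
    for label, pattern in PATTERNS:
        if pattern.search(note):
            return label
    return "unknown"
```

In practice such rules fail on the long tail of phrasings across sites, which is exactly the generalization gap the abstract measures against LLM-based extraction.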
Objective: Much medical data is only available in unstructured electronic health records (EHRs). These data can be obtained through manual (human) extraction or programmatic natural language processing (NLP) methods. We estimate that NLP only becomes economically competitive with manual extraction when there are ~6500 EHR records. We have found that there is interest from clinicians and researchers in using NLP on projects with fewer records. We examine whether a large language model (LLM) can be ...
Rapid and automated extraction of clinical information from patients' notes is a desirable though difficult task. Natural language processing (NLP) and machine learning have great potential to automate and accelerate such applications, but developing such models can require a large amount of labeled clinical text, which can be a slow and laborious process. To address this gap, we propose the MedDRA tagger, a fast annotation tool that makes use of industrial-level libraries such as spaCy, biomedic...
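The core of a fast dictionary tagger like the one described is phrase matching against a terminology. The pure-Python sketch below illustrates the concept only; the real tool builds on spaCy's matching machinery, and the lexicon entries and code values here are hypothetical, not actual MedDRA codes.

```python
# Illustrative lexicon mapping surface terms to hypothetical MedDRA-style codes.
LEXICON = {
    "headache": "10000001",
    "nausea": "10000002",
    "chest pain": "10000003",
}

def tag_terms(text):
    """Return (term, code, start_offset) tuples for lexicon phrases in text."""
    lowered = text.lower()
    hits = []
    for term, code in LEXICON.items():
        start = lowered.find(term)
        while start != -1:
            hits.append((term, code, start))
            start = lowered.find(term, start + 1)
    # Report matches in document order.
    return sorted(hits, key=lambda h: h[2])
```

A production tagger would add tokenization, lemmatization, and negation handling, which is where libraries like spaCy earn their keep over naive substring search.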
Interoperability in health information systems is crucial for accurate data exchange across environments such as electronic health records, clinical notes, and medical research. The main challenge arises from the wide variation in biomedical concepts, their representation across different systems and languages, and the limited context, complicating data integration and standardization. Inspired by recent advances in large language models (LLMs), this study explores their potential role as biomed...
Pharmacogenomics enables personalized medicine by predicting individual drug responses based on genetic makeup, but complex guideline retrieval remains challenging for clinicians, particularly in resource-limited settings. While large language models (LLMs) show promise for numerous healthcare applications, their performance on domain-specific pharmacogenomics queries without expert knowledge integration remains limited. We evaluated whether Retrieval-Augmented Generation (RAG) enhancement impro...
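The RAG enhancement the abstract evaluates amounts to retrieving relevant guideline text and prepending it to the model's prompt. The sketch below uses simple token overlap as the retriever; the scoring method, snippets, and drug/gene pairs are illustrative assumptions, not the study's retriever or CPIC guideline text.

```python
import re

def tokenize(text):
    """Lowercased word tokens as a set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, k=1):
    """Rank documents by token overlap with the query; return the top k."""
    scored = sorted(
        documents,
        key=lambda doc: len(tokenize(query) & tokenize(doc)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend retrieved context to the question, RAG-style."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Made-up guideline snippets for illustration.
docs = [
    "CYP2C19 poor metabolizers may need an alternative to clopidogrel.",
    "TPMT variants can require thiopurine dose reduction.",
]
prompt = build_prompt("Which gene affects clopidogrel response?", docs)
```

Real systems replace the overlap scorer with dense embedding retrieval, but the prompt-assembly step is essentially this.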
Objective: The classification of clinical note sections is a critical step before performing more fine-grained natural language processing tasks such as social determinants of health extraction and temporal information extraction. Often, clinical note section classification models that achieve high accuracy for one institution experience a large drop in accuracy when transferred to another institution. The objective of this study is to develop methods that classify clinical note sections under the SOAP...
The automatic extraction of medication mentions from social media data is critical for pharmacovigilance and public health monitoring. In this study, we present an end-to-end generative approach based on instruction-tuned large language models (LLMs) for medication mention extraction from Twitter. Reformulating the task as a text-to-text generation problem, our models achieve state-of-the-art results on both fine-grained span extraction and coarse-grained tweet-level classification, surpassing t...
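Reformulating span extraction as text-to-text generation, as this abstract describes, means the model emits mentions as a string that must be mapped back to character spans. The parsing sketch below illustrates that post-processing step; the delimiter, tweet, and generated output are assumptions for illustration, not the paper's format.

```python
def parse_generation(tweet: str, generated: str):
    """Map each mention in a semicolon-delimited generation back to its
    character span in the source tweet (case-insensitive match)."""
    spans = []
    for mention in (m.strip() for m in generated.split(";") if m.strip()):
        start = tweet.lower().find(mention.lower())
        if start != -1:  # drop hallucinated mentions absent from the tweet
            spans.append((mention, start, start + len(mention)))
    return spans

# Hypothetical tweet and model output, for illustration only.
spans = parse_generation(
    "Started ibuprofen and melatonin for my back pain",
    "ibuprofen; melatonin",
)
```

Discarding generated mentions that cannot be located in the source text is a common guard against LLM hallucination in generative extraction pipelines.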
Background: With the growing volume and complexity of laboratory repositories, it has become tedious to parse unstructured data into structured and tabulated formats for secondary uses such as decision support, quality assurance, and outcome analysis. However, advances in Natural Language Processing (NLP) approaches have enabled efficient and automated extraction of clinically meaningful medical concepts from unstructured reports. Objective: In this study, we aimed to determine the feasibility of u...
Purpose: Malnutrition is a serious health concern, particularly among older people living in residential aged care facilities. An automated and efficient method is required to identify individuals afflicted with malnutrition in this setting. Recent advancements in transformer-based large language models (LLMs) equipped with sophisticated context-aware embeddings, such as RoBERTa, have significantly improved machine learning performance, particularly in predictive modelling. Enhancing t...
Mapping electronic health records (EHR) data to common data models (CDMs) enables the standardization of clinical records, enhancing interoperability and enabling large-scale, multi-centered clinical investigations. Using 2 large publicly available datasets, we developed transformer-based natural language processing models to map medication-related concepts from the EHR at a large and diverse healthcare system to standard concepts in OMOP CDM. We validated the model outputs against standard conc...
Multi-label classification of unstructured electronic health records (EHR) is challenging due to the semantic complexity of textual data. Identifying the most effective machine learning method for EHR classification is useful in real-world clinical settings. Advances in natural language processing (NLP) using large language models (LLMs) offer promising solutions. Therefore, this experimental research aims to test the effects of zero-shot and few-shot learning prompting, with and without paramet...
Text embeddings convert textual information into numerical representations, enabling machines to perform semantic tasks like information retrieval. Despite their potential, the application of text embeddings in healthcare is underexplored, in part due to a lack of benchmarking studies using biomedical data. This study provides a flexible framework for benchmarking embedding models to identify those most effective for healthcare-related semantic tasks. We selected thirty embedding models from the mu...
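A retrieval benchmark of the kind the abstract describes reduces to: embed queries and documents, rank by cosine similarity, and score against labeled pairs. The sketch below uses a bag-of-words vector as a stand-in embedding so it is self-contained; the `embed` function, corpus, and queries are illustrative assumptions, and a real benchmark would plug in a neural embedding model instead.

```python
import math

def embed(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary (stand-in embedding)."""
    tokens = text.lower().split()
    return [float(tokens.count(term)) for term in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recall_at_1(queries, corpus, gold_indices, vocab):
    """Fraction of queries whose nearest document is the labeled one."""
    hits = 0
    for query, gold in zip(queries, gold_indices):
        scores = [cosine(embed(query, vocab), embed(doc, vocab)) for doc in corpus]
        hits += scores.index(max(scores)) == gold
    return hits / len(queries)

# Made-up corpus and relevance labels, for illustration only.
corpus = ["aspirin reduces fever and pain", "insulin lowers blood glucose"]
queries = ["drug for fever", "glucose control"]
vocab = sorted({t for text in corpus + queries for t in text.lower().split()})
score = recall_at_1(queries, corpus, [0, 1], vocab)
```

Holding the evaluation harness fixed while swapping `embed` is what makes cross-model comparisons in such benchmarks meaningful.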
Adverse drug events (ADEs) are a critical aspect of patient safety and pharmacovigilance, with significant implications for patient outcomes and public health monitoring. The increasing availability of electronic health records, social media, and online patient forums provides valuable yet challenging unstructured data sources for ADE surveillance. To address these challenges, we introduce CONORM, a novel framework integrating named entity recognition (NER) and entity normalization (EN) for ADE ...
Objective: To develop a transformer-based natural language processing (NLP) system for detecting adverse drug events (ADEs) from clinical notes in electronic health records (EHRs). Materials and Methods: We fine-tuned BERT Short-Formers and Clinical-Longformer using the processed dataset of the 2018 National NLP Clinical Challenges (n2c2) shared task Track 2. We investigated two data processing methods, window-based and split-based approaches, to find an optimal processing method. We evaluated the ...
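The window-based processing the abstract compares can be pictured as slicing a long note into overlapping, fixed-size token windows so each chunk fits a transformer's input limit. The sketch below illustrates the chunking step only; the window and stride sizes are tiny for demonstration, and the note text is made up.

```python
def window_tokens(tokens, size, stride):
    """Split a token list into overlapping windows of at most `size` tokens,
    advancing by `stride` tokens each step (stride < size gives overlap)."""
    windows = []
    for start in range(0, len(tokens), stride):
        windows.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already reaches the end of the note
    return windows

# Hypothetical short note; real clinical notes run to thousands of tokens.
note = "patient developed rash after starting amoxicillin yesterday evening".split()
chunks = window_tokens(note, size=4, stride=2)
```

The overlap between consecutive windows is what keeps entity mentions near a chunk boundary visible in at least one window, the usual motivation for window-based over hard-split processing.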